Matrix multiplication
In mathematics, matrix multiplication is a binary operation that takes a pair of matrices, and produces another matrix. Numbers such as the real or complex numbers can be multiplied according to elementary arithmetic. On the other hand, matrices are ''arrays of numbers'', so there is no unique way to define "the" multiplication of matrices. As such, in general the term "matrix multiplication" refers to a number of different ways to multiply matrices. The key features of any matrix multiplication include: the number of rows and columns the original matrices have (called the "size", "order" or "dimension"), and specifying how the entries of the matrices generate the new matrix.
Like vectors, matrices of any size can be multiplied by scalars, which amounts to multiplying every entry of the matrix by the same number. Similar to the entrywise definition of adding or subtracting matrices, multiplication of two matrices of the same size can be defined by multiplying the corresponding entries, and this is known as the Hadamard product. Another definition is the Kronecker product of two matrices, to obtain a block matrix.
One can form many other definitions. However, the most useful definition can be motivated by linear equations and linear transformations on vectors, which have numerous applications in applied mathematics, physics, and engineering. This definition is often called ''the'' matrix product. In words, if '''A''' is an n × m matrix and '''B''' is an m × p matrix, their matrix product '''AB''' is an n × p matrix, in which the m entries across a row of '''A''' are multiplied with the m entries down a column of '''B''' and summed to produce an entry of '''AB''' (the precise definition is below).
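As a rough sketch of this row-times-column rule (not taken from the article; the function name matmul and the plain list-of-rows representation are my own choices):

```python
def matmul(A, B):
    """Multiply an n x m matrix A by an m x p matrix B, both stored as lists of rows."""
    n, m = len(A), len(A[0])
    m2, p = len(B), len(B[0])
    if m != m2:
        raise ValueError("inner dimensions must agree: columns of A must equal rows of B")
    # Entry (i, j) of the product: row i of A times column j of B, summed over k.
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Example: a (2 x 3) matrix times a (3 x 2) matrix gives a (2 x 2) matrix.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7,  8],
     [9, 10],
     [11, 12]]
print(matmul(A, B))  # [[58, 64], [139, 154]]
```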
The matrix product so defined is not commutative, although it is associative and distributive over entrywise addition of matrices. The identity element of the matrix product is the identity matrix (analogous to multiplying numbers by 1), and a square matrix may have an inverse matrix (analogous to the multiplicative inverse of a number). One consequence of the definition is that the determinant is multiplicative: det('''AB''') = det('''A''') det('''B'''). The matrix product is an important operation in linear transformations, matrix groups, and the theory of group representations and irreducible representations (irreps).
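Reusing the hypothetical matmul sketch above, a quick check that the product is generally not commutative while the identity matrix behaves like the number 1:

```python
A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]
I = [[1, 0],
     [0, 1]]
print(matmul(A, B))  # [[2, 1], [4, 3]]  (columns of A swapped)
print(matmul(B, A))  # [[3, 4], [1, 2]]  (rows of A swapped) -- differs from A*B
print(matmul(A, I) == A, matmul(I, A) == A)  # True True: I is the identity element
```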
Computing matrix products is both a central operation in many numerical algorithms and potentially time-consuming, making it one of the most well-studied problems in numerical computing. Various algorithms have been devised for computing '''AB''', especially for large matrices.
This article will use the following notational conventions: matrices are represented by capital letters in bold, e.g. '''A'''; vectors in lowercase bold, e.g. '''a'''; and entries of vectors and matrices are italic (since they are scalars), e.g. ''a'' and ''A''. Index notation is often the clearest way to express definitions, and is used as standard in the literature. The entry in row i and column j of matrix '''A''' is indicated by ('''A''')_ij, A_ij or a_ij, whereas a numerical label (not a matrix entry) on a collection of matrices is subscripted only, e.g. '''A'''_1, '''A'''_2, etc.
== Scalar multiplication ==
(For details, see also the Kronecker product.)
The left scalar multiplication of a matrix '''A''' with a scalar λ gives another matrix λ'''A''' of the same size as '''A'''. The entries of λ'''A''' are defined by
: (\lambda \mathbf{A})_{ij} = \lambda\left(\mathbf{A}\right)_{ij}\,,
explicitly:
: \lambda \mathbf{A} = \lambda \begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1m} \\
A_{21} & A_{22} & \cdots & A_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nm} \\
\end{pmatrix} = \begin{pmatrix}
\lambda A_{11} & \lambda A_{12} & \cdots & \lambda A_{1m} \\
\lambda A_{21} & \lambda A_{22} & \cdots & \lambda A_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
\lambda A_{n1} & \lambda A_{n2} & \cdots & \lambda A_{nm} \\
\end{pmatrix}\,.
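A minimal entrywise sketch of this left scalar multiplication (the helper name scalar_left is my own; ordinary numeric entries are assumed):

```python
def scalar_left(lam, A):
    """(lam * A)_ij = lam * A_ij, applied to every entry of A (a list of rows)."""
    return [[lam * entry for entry in row] for row in A]

print(scalar_left(2, [[1, 2], [3, 4]]))  # [[2, 4], [6, 8]]
```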
Similarly, the right scalar multiplication of a matrix '''A''' with a scalar λ is defined to be
: (\mathbf{A}\lambda)_{ij} = \left(\mathbf{A}\right)_{ij} \lambda\,,
explicitly:
: \mathbf{A}\lambda = \begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1m} \\
A_{21} & A_{22} & \cdots & A_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nm} \\
\end{pmatrix}\lambda = \begin{pmatrix}
A_{11} \lambda & A_{12} \lambda & \cdots & A_{1m} \lambda \\
A_{21} \lambda & A_{22} \lambda & \cdots & A_{2m} \lambda \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} \lambda & A_{n2} \lambda & \cdots & A_{nm} \lambda \\
\end{pmatrix}\,.
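The right scalar multiplication differs only in the order of the two factors in each entry, which matters only when the scalars themselves do not commute; a companion to the sketch above:

```python
def scalar_right(A, lam):
    """(A * lam)_ij = A_ij * lam; agrees with left multiplication over a commutative ring."""
    return [[entry * lam for entry in row] for row in A]

print(scalar_right([[1, 2], [3, 4]], 2))  # [[2, 4], [6, 8]] -- equal to 2 * A here
```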
When the underlying ring is commutative, for example the real or complex number field, these two multiplications are the same and are simply called ''scalar multiplication''. However, for matrices over a more general ring whose multiplication is ''not'' commutative, such as the quaternions, they may not be equal.
For a real scalar and matrix:
: \lambda = 2, \quad \mathbf{A} = \begin{pmatrix}
a & b \\
c & d \\
\end{pmatrix}
: 2 \mathbf{A} = 2 \begin{pmatrix}
a & b \\
c & d \\
\end{pmatrix} = \begin{pmatrix}
2 \!\cdot\! a & 2 \!\cdot\! b \\
2 \!\cdot\! c & 2 \!\cdot\! d \\
\end{pmatrix} = \begin{pmatrix}
a \!\cdot\! 2 & b \!\cdot\! 2 \\
c \!\cdot\! 2 & d \!\cdot\! 2 \\
\end{pmatrix} = \begin{pmatrix}
a & b \\
c & d \\
\end{pmatrix} 2 = \mathbf{A}2.
For quaternion scalars and matrices:
: \lambda = i, \quad \mathbf{A} = \begin{pmatrix}
i & 0 \\
0 & j \\
\end{pmatrix}
:
i \begin{pmatrix}
i & 0 \\
0 & j \\
\end{pmatrix}
= \begin{pmatrix}
i^2 & 0 \\
0 & ij \\
\end{pmatrix}
= \begin{pmatrix}
-1 & 0 \\
0 & k \\
\end{pmatrix}
\ne \begin{pmatrix}
-1 & 0 \\
0 & -k \\
\end{pmatrix}
= \begin{pmatrix}
i^2 & 0 \\
0 & ji \\
\end{pmatrix}
= \begin{pmatrix}
i & 0 \\
0 & j \\
\end{pmatrix} i\,,

where ''i'', ''j'', ''k'' are the quaternion units. The non-commutativity of quaternion multiplication prevents the entry ''ij'' = +''k'' from being replaced by ''ji'' = −''k'', so left and right scalar multiplication give different matrices.
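To make the quaternion example concrete, here is a small self-contained sketch (the Quaternion class and all helper names are mine, not part of the article) showing that multiplying this matrix by ''i'' on the left and on the right gives different results:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quaternion:
    w: float = 0.0  # real part
    x: float = 0.0  # coefficient of i
    y: float = 0.0  # coefficient of j
    z: float = 0.0  # coefficient of k

    def __mul__(self, other):
        # Hamilton's product rules: i^2 = j^2 = k^2 = ijk = -1.
        a, b = self, other
        return Quaternion(
            a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
            a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
            a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
            a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
        )

one, i, j, k = Quaternion(w=1), Quaternion(x=1), Quaternion(y=1), Quaternion(z=1)
zero = Quaternion()

A = [[i, zero],
     [zero, j]]

left = [[i * entry for entry in row] for row in A]    # i * A (left scalar multiplication)
right = [[entry * i for entry in row] for row in A]   # A * i (right scalar multiplication)

print(left[1][1])    # i*j = +k  -> Quaternion(w=0, x=0, y=0, z=1)
print(right[1][1])   # j*i = -k  -> Quaternion(w=0, x=0, y=0, z=-1)
print(left == right) # False: the two scalar multiplications differ over the quaternions
```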
